04. Activation Functions
- Further reading on activation functions.
Backpropagation & Updating the Weights of a Network
As a neural network trains, it starts off with randomly guessed weights and will often produce incorrect outputs, with some error that can be measured as the difference between a model's output and the true output label or value. For a model to improve, it needs a way to:
- Identify the source of its error (which model weights are responsible for the error?)
- Update those weights to a new, better value
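Before either step can happen, the error itself has to be measured. A minimal sketch in Python, assuming squared error as the measure (the text does not name a specific one):

```python
# Measure a model's error as the difference between its output
# and the true label. Squared error is one common choice
# (an assumption here, not something the text specifies).
def squared_error(prediction, target):
    """Squared difference between a model output and the true value."""
    return (prediction - target) ** 2

# A prediction of 0.8 against a true label of 1.0 gives a small,
# but nonzero, error the network can learn from.
print(squared_error(0.8, 1.0))
```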
The first step, identifying the weights responsible for an output's error, is referred to as backpropagation. Essentially, backpropagation looks at the error at the output of a model and works backwards through the nodes and layers of the network to find the source of the error.
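For a tiny "network" with a single weight, going backwards amounts to applying the chain rule. A sketch, assuming the model `output = w * x` and squared error (both hypothetical choices for illustration):

```python
# Backpropagation on a one-weight model: output = w * x,
# error = (output - target) ** 2. Starting from the error, the
# chain rule tells us how much the weight w contributed to it.
def forward(w, x):
    return w * x

def backward(w, x, target):
    """Return d(error)/d(w): how the error changes as w changes."""
    output = forward(w, x)
    d_error_d_output = 2 * (output - target)  # derivative of squared error
    d_output_d_w = x                          # derivative of w * x w.r.t. w
    return d_error_d_output * d_output_d_w    # chain rule

grad = backward(w=0.5, x=2.0, target=2.0)
print(grad)  # -4.0: a negative gradient, so increasing w reduces the error
```

The sign of the gradient is what matters for the next step: it tells us which direction to move each weight.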
Then comes the second step: updating the weights, increasing or decreasing each one in response to the error it contributed.
The cycle of backpropagation and weight updates continues until the model is trained, that is, until its error is sufficiently low!
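Putting the two steps together, the training cycle can be sketched as a simple loop. This is a minimal illustration using gradient descent on a one-weight model (the specific model, values, and learning rate are assumptions, not anything from the text):

```python
# A minimal training loop for the one-weight model output = w * x.
# Each cycle: measure the error, backpropagate to find the gradient,
# then nudge the weight in the direction that reduces the error.
x, target = 2.0, 2.0      # one training example (hypothetical values)
w = 0.1                   # start from a guess (normally random)
learning_rate = 0.05

for step in range(100):
    output = w * x
    error = (output - target) ** 2
    if error < 1e-6:                      # stop once error is sufficiently low
        break
    grad = 2 * (output - target) * x      # backpropagation: d(error)/d(w)
    w = w - learning_rate * grad          # update the weight against the gradient

print(round(w, 3))  # w approaches 1.0, since 1.0 * 2.0 matches the target
```

Real networks have many weights updated at once, but each one follows this same cycle of backpropagation and adjustment.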